
    Explicit Forgetting Algorithms for Memory Based Learning

    Memory-based learning algorithms lack a mechanism for tracking time-varying associative mappings. To widen their applicability, they must incorporate explicit forgetting algorithms that selectively delete observations. We describe Time-Weighted, Locally-Weighted, and Performance-Error Weighted forgetting algorithms. These were evaluated with a Nearest-Neighbor learner in a simple classification task. Locally-Weighted Forgetting outperformed Time-Weighted Forgetting under time-varying sampling distributions and mappings, and did equally well when only the mapping varied. Performance-Error Weighted Forgetting tracked about as well as the other algorithms, but was superior in that it permitted the Nearest-Neighbor learner to approach the Bayes' misclassification rate when the input-output mapping became stationary.
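The contrast between the first two policies can be sketched in a few lines. The class below is an illustrative 1-D reconstruction, not the paper's implementation: at capacity, the learner either deletes the oldest exemplar (time-weighted) or the stored exemplar nearest the incoming observation (locally-weighted). The names `ForgettingNN`, `observe`, and `classify` are ours.

```python
class ForgettingNN:
    """1-nearest-neighbor classifier with two simple forgetting policies.

    'time'  : delete the oldest observation once capacity is reached.
    'local' : delete the stored observation closest to the new one,
              keeping density bounded where sampling concentrates.
    """

    def __init__(self, capacity=50, policy="time"):
        self.capacity = capacity
        self.policy = policy
        self.memory = []  # list of (x, label), oldest first

    def observe(self, x, label):
        if len(self.memory) >= self.capacity:
            if self.policy == "time":
                self.memory.pop(0)  # forget the oldest exemplar
            else:  # 'local'
                i = min(range(len(self.memory)),
                        key=lambda j: abs(self.memory[j][0] - x))
                self.memory.pop(i)  # forget the nearest exemplar
        self.memory.append((x, label))

    def classify(self, x):
        _, label = min(self.memory, key=lambda m: abs(m[0] - x))
        return label
```

Under a drifting sampling distribution, the 'local' policy retains exemplars from sparsely revisited regions, while 'time' discards them purely by age.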

    A Robotic System for Learning Visually-Driven Grasp Planning (Dissertation Proposal)

    We use findings in machine learning, developmental psychology, and neurophysiology to guide a robotic learning system's level of representation both for actions and for percepts. Visually-driven grasping is chosen as the experimental task since it has general applicability and it has been extensively researched from several perspectives. An implementation of a robotic system with a gripper, compliant instrumented wrist, arm, and vision is used to test these ideas. Several sensorimotor primitives (vision segmentation and manipulatory reflexes) are implemented in this system and may be thought of as the innate perceptual and motor abilities of the system. Applying empirical learning techniques to real situations brings up such important issues as observation sparsity in high-dimensional spaces, arbitrary underlying functional forms of the reinforcement distribution, and robustness to noise in exemplars. The well-established technique of non-parametric projection pursuit regression (PPR) is used to accomplish reinforcement learning by searching for projections of high-dimensional data sets that capture task invariants. We also pursue the following problem: how can we use human expertise and insight into grasping to train a system to select both appropriate hand preshapes and approaches for a wide variety of objects, and then have it verify and refine its skills through trial and error? To accomplish this learning we propose a new class of Density Adaptive reinforcement learning algorithms. These algorithms use statistical tests to identify possibly interesting regions of the attribute space in which the dynamics of the task change. They automatically concentrate the building of high-resolution descriptions of the reinforcement in those areas, and build low-resolution representations in regions that are either not populated in the given task or are highly uniform in outcome. Additionally, the use of any learning process generally implies failures along the way. Therefore, the mechanics of the untrained robotic system must be able to tolerate mistakes during learning and not damage itself. We address this by the use of an instrumented, compliant robot wrist that controls impact forces.
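The density-adaptive idea of refining resolution only where outcomes vary can be sketched with a recursive cell structure. This is a minimal 1-D illustration under assumed names (`AdaptiveCell`, with a plain within-cell variance threshold standing in for the statistical test), not the proposed algorithm itself.

```python
class AdaptiveCell:
    """Recursively refine attribute-space cells only where the
    reinforcement signal is non-uniform (density-adaptive sketch)."""

    def __init__(self, lo, hi, min_width=0.05):
        self.lo, self.hi = lo, hi
        self.min_width = min_width
        self.samples = []     # (x, reward) pairs seen in this cell
        self.children = None  # (left, right) after a split

    def insert(self, x, reward, split_threshold=0.05, min_samples=8):
        if self.children:
            mid = (self.lo + self.hi) / 2
            child = self.children[0] if x < mid else self.children[1]
            child.insert(x, reward, split_threshold, min_samples)
            return
        self.samples.append((x, reward))
        rewards = [r for _, r in self.samples]
        mean = sum(rewards) / len(rewards)
        var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
        # split only where outcomes vary and the cell is still wide enough
        if (len(self.samples) >= min_samples and var > split_threshold
                and self.hi - self.lo > 2 * self.min_width):
            mid = (self.lo + self.hi) / 2
            self.children = (AdaptiveCell(self.lo, mid, self.min_width),
                             AdaptiveCell(mid, self.hi, self.min_width))
            for xs, rs in self.samples:  # re-route samples to children
                self.insert(xs, rs, split_threshold, min_samples)
            self.samples = []

    def depth_at(self, x):
        """Resolution of the representation at x (tree depth)."""
        if not self.children:
            return 0
        mid = (self.lo + self.hi) / 2
        child = self.children[0] if x < mid else self.children[1]
        return 1 + child.depth_at(x)
```

Feeding it a reward signal with a step at one point produces deep refinement near the step and a single coarse cell over each uniform region.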

    Displays for telemanipulation

    Visual displays drive the human operator's highest-bandwidth sensory input channel; thus, no telemanipulation system that fails to make extensive use of visual displays is adequate. Although an important use of visual displays is the presentation of a televised image of the work scene, visual displays are also examined for the presentation of nonvisual information (forces and torques), for simulation and planning, and for management and control of the large number of subsystems that make up a modern telemanipulation system.

    Sensorimotor Learning Using Active Perception in Continuous Domains

    We propose that some aspects of task-based learning in robotics can be approached using nativist and constructivist views on human sensorimotor development as a metaphor. We use findings in developmental psychology, neurophysiology, and machine perception to guide a robotic learning system's level of representation both for actions and for percepts. Visually-driven grasping is chosen as the experimental task since it has general applicability and it has been extensively researched from several perspectives. An implementation of a robotic system with a dexterous three-fingered hand, compliant instrumented wrist, arm, and vision is used to test these ideas. Several sensorimotor primitives (vision segmentation and manipulatory reflexes) are implemented in this system and may be thought of as the innate perceptual and motor abilities of the system. Applying empirical learning techniques to real situations brings up some important issues such as observation sparsity in high-dimensional spaces, arbitrary underlying functional forms of the reinforcement distribution, and robustness to noise in exemplars. The well-established technique of non-parametric projection pursuit regression (PPR) is used to accomplish reinforcement learning by searching for generalization directions determining projections of high-dimensional data sets that capture task invariants. Additionally, the learning process generally implies failures along the way. Therefore, the mechanics of the untrained robotic system must be able to tolerate grave mistakes during learning and not damage itself. We address this by the use of an instrumented compliant robot wrist that controls impact forces.
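The core of PPR's search for generalization directions can be illustrated with a single ridge term in 2-D: scan candidate unit directions, fit a simple 1-D smoother to the projected data, and keep the direction with the lowest residual. This is a toy reconstruction (binned means stand in for PPR's smoother; function names are ours), not the system's implementation.

```python
import math

def fit_one_ridge(X, y, n_dirs=36, n_bins=8):
    """One ridge term of projection pursuit regression (2-D sketch):
    scan unit directions, fit a binned-mean smoother to the projected
    data, and return the direction with the lowest squared error."""
    best = None
    for k in range(n_dirs):
        theta = math.pi * k / n_dirs
        a = (math.cos(theta), math.sin(theta))
        z = [a[0] * x1 + a[1] * x2 for x1, x2 in X]  # project the data
        zlo, zhi = min(z), max(z)
        width = (zhi - zlo) / n_bins or 1.0
        bins = [[] for _ in range(n_bins)]
        for zi, yi in zip(z, y):
            bins[min(int((zi - zlo) / width), n_bins - 1)].append(yi)
        means = [sum(b) / len(b) if b else 0.0 for b in bins]
        pred = [means[min(int((zi - zlo) / width), n_bins - 1)] for zi in z]
        err = sum((p - yi) ** 2 for p, yi in zip(pred, y))
        if best is None or err < best[0]:
            best = (err, a, means, zlo, width)
    return best  # (error, direction, bin means, offset, bin width)
```

If the target is a function of a single projection, such as y = (x1 - x2)^2, the scan recovers a direction along (1, -1) up to sign, since the 1-D smoother absorbs the ridge function itself.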

    Robotic Sensorimotor Learning in Continuous Domains

    We propose that some aspects of task-based learning in robotics can be approached using nativist and constructivist views on human sensorimotor development as a metaphor. We use findings in developmental psychology, neurophysiology, and machine perception to guide a robotic learning system's level of representation both for actions and for percepts. Visually-driven grasping is chosen as the experimental task since it has general applicability and it has been extensively researched from several perspectives. An implementation of a robotic system with a dexterous three-fingered hand, compliant instrumented wrist, arm, and vision is used to test these ideas. Several sensorimotor primitives (vision segmentation and manipulatory reflexes) are implemented in this system and may be thought of as the innate perceptual and motor abilities of the system. Applying empirical learning techniques to real situations brings up some important issues such as observation sparsity in high-dimensional spaces, arbitrary underlying functional forms of the reinforcement distribution, and robustness to noise in exemplars. The well-established technique of non-parametric projection pursuit regression (PPR) is used to accomplish reinforcement learning by searching for generalization directions determining projections of high-dimensional data sets that capture task invariants. Additionally, the learning process generally implies failures along the way. Therefore, the mechanics of the untrained robotic system must be able to tolerate grave mistakes during learning and not damage itself. We address this by the use of an instrumented compliant robot wrist that controls impact forces.

    Learning for Coordination of Vision and Action

    We define the problem of visuomotor coordination and identify bottleneck problems in the implementation of general-purpose vision and action systems. We conjecture that machine learning methods provide a general-purpose mechanism for combining specific visual and action modules in a task-independent way. We also maintain that successful learning systems reflect realities of the environment, exploit context information, and identify limitations in perceptual algorithms that cannot be anticipated by the designer. We then propose a multi-step find-and-fetch mobile robot search and retrieval task. This task illustrates where current learning approaches provide solutions and where future research opportunities exist.

    A Vision-Based Learning Method for Pushing Manipulation

    We describe an unsupervised on-line method for learning of manipulative actions that allows a robot to push an object connected to it with a rotational point contact to a desired point in image-space. By observing the results of its actions on the object's orientation in image-space, the system forms a predictive empirical forward model. This acquired model is used on-line for manipulation planning and control as it improves. Rather than explicitly inverting the forward model to achieve trajectory control, a stochastic action selection technique [Moore, 1990] is used to select the most informative and promising actions, thereby integrating active perception and learning by combining on-line improvement, task-directed exploration, and model exploitation. Simulation and experimental results of the approach are presented.
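The trade-off between informative and promising actions can be sketched as a score combining the forward model's predicted progress with an uncertainty bonus for under-sampled actions. This is only in the spirit of the stochastic action selection cited above; the class name, scoring rule, and bonus term are illustrative assumptions, not the paper's algorithm.

```python
import math

class PushActionSelector:
    """Choose push actions by combining an empirical forward model's
    predicted progress toward the goal orientation with an exploration
    bonus for poorly sampled actions (illustrative sketch)."""

    def __init__(self, actions, bonus=1.0):
        self.actions = actions
        self.bonus = bonus
        self.stats = {a: [0, 0.0] for a in actions}  # action -> [count, mean delta]

    def record(self, action, observed_delta):
        n, mean = self.stats[action]
        n += 1
        mean += (observed_delta - mean) / n  # running mean of observed effect
        self.stats[action] = [n, mean]

    def select(self, theta, goal):
        def score(a):
            n, mean = self.stats[a]
            progress = -abs((theta + mean) - goal)   # predicted closeness to goal
            explore = self.bonus / math.sqrt(n + 1)  # favour under-tried actions
            return progress + explore
        return max(self.actions, key=score)
```

Early on, the bonus dominates and the selector samples the action space (task-directed exploration); as counts grow, the learned forward model dominates (model exploitation).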

    Object Retrieval Strategy in Unstructured Environments Using Active Vision and a Medium Complexity Gripper

    Depth information from a mobile laser stripe scanner mounted on a PUMA 560 robot is used for simple thresholding by z-height and region growing. Superquadric surfaces are then fit to the segmented regions. This data reduction to three axis parameters, three Euler angles, and two squareness parameters allows grasp planning using the Penn hand, a medium-complexity end effector. Additionally, in order to take into account the spatial relationships between objects, they are grouped according to a nearest-neighbor measure by distance between centroids in the x-y plane and also by height. The convex hull of the groups is then computed using Graham's method. The convex hull object list permits objects with the best clearance for grasp to be identified, thus reducing the possibility of unwanted collisions during the enclosure phase. The geometric properties of the object are then used to determine whether an approach parallel or normal to the plane of support is necessary. This list of candidate grasps for the object is checked for intersections with the bounding boxes of neighboring objects and the finger trajectories. The most stable collision-free grasp preshape that passes the intersection testing is chosen. If no grasp is collision-free, the next best object in terms of topology is chosen. Height clustering information is used to determine a baseline height for transporting objects in a collision-free fashion. By combining these simple strategies of favoring objects at the exterior of groups and tall objects for initial grasping and removal, the chances for successful task completion are increased with minimal computational burden.
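The convex-hull step above can be illustrated with the monotone-chain variant closely related to Graham's method: the hull vertices of a group's centroids are exactly the exterior objects favored for initial grasping. Function and variable names are ours.

```python
def convex_hull(points):
    """Convex hull of 2-D centroids (monotone-chain variant of
    Graham's scan).  Hull vertices = exterior objects of a group."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:  # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):  # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # counter-clockwise hull vertices
```

An object whose centroid lies strictly inside the hull is surrounded by neighbors and is deferred until exterior objects have been removed.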

    HEAP: A Sensory Driven Distributed Manipulation System

    We address the problems of locating, grasping, and removing one or more unknown objects from a given area. In order to accomplish the task we use HEAP, a system coordinating the motions of the hand and arm. HEAP also includes a laser range finder, mounted at the end of a PUMA 560, allowing the system to obtain multiple views of the workspace. We obtain volumetric information about the objects we locate by fitting superquadric surfaces to the raw range data. The volumetric information is used to ascertain the best hand configuration to enclose and constrain the object stably. The Penn Hand, used to grasp the object, is fitted with 14 tactile sensors to determine the contact area and the normal components of the grasping forces. In addition, the hand is used as a sensor to avoid any undesired collisions. The objective in grasping the objects is not to impart arbitrary forces on the object, but instead to be able to grasp a variety of objects using a simple grasping scheme assisted with a volumetric description and force and touch sensing.
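The superquadric fit mentioned above is typically driven by the standard inside-outside function, shown below for an axis-aligned superquadric in canonical pose (a full fit would also recover translation and Euler angles; this sketch assumes those are already removed, and the function name is ours).

```python
def superquadric_F(p, axes, eps1, eps2):
    """Inside-outside function of a canonical superquadric:
    F < 1 inside, F = 1 on the surface, F > 1 outside.
    axes = (a1, a2, a3) semi-axis lengths; eps1, eps2 = squareness."""
    x, y, z = (abs(c) / a for c, a in zip(p, axes))
    return ((x ** (2 / eps2) + y ** (2 / eps2)) ** (eps2 / eps1)
            + z ** (2 / eps1))
```

Fitting minimizes a residual of F over the range points; the recovered axis lengths and squareness parameters then feed the hand-configuration choice directly.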

    Receptive Fields for the Determination of Textured Surface Inclination

    The image of a uniformly textured inclined surface exhibits systematic distortions which affect the projection of the spatial frequencies of which the texture is composed. Using a set of filters having suitable spatial, frequency, and orientation resolution, the inclination angle of the textured surface may be estimated from the resulting spatial frequency gradients. Psychophysical experiments suggest that, in the absence of other cues, humans perceive surface inclination from perspective distortions, suggesting the possibility of a specific neuronal mechanism in the visual system. Beginning with a low-level filter model found to be an accurate and economical model for simple cell receptive fields, we have developed both algorithmic machine vision and neural network models to investigate physiologically plausible mechanisms for this behavior. The two models are related through a new class of receptive field formed in the hidden layer of a neural network which learned to solve the problem. This receptive field can also be described analytically from the analysis developed for the algorithmic study. This paper, then, offers a prediction for a new type of receptive field in cortex that may be involved in the perception of inclined textured surfaces.
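The spatial-frequency-gradient idea can be sketched in one dimension: estimate the locally dominant frequency at two image positions with a bank of Gaussian-windowed (Gabor-like) filters, and read the inclination cue off the difference. This is a toy illustration under assumed names, not the paper's filter model.

```python
import math

def dominant_freq(signal, center, freqs, sigma=8.0):
    """Locally dominant spatial frequency at `center`, estimated with
    a bank of Gaussian-windowed (Gabor-like) filters."""
    best_power, best_f = -1.0, freqs[0]
    for f in freqs:
        re = im = 0.0
        for n, s in enumerate(signal):
            w = math.exp(-((n - center) ** 2) / (2 * sigma ** 2))
            re += s * w * math.cos(2 * math.pi * f * n)
            im += s * w * math.sin(2 * math.pi * f * n)
        power = re * re + im * im
        if power > best_power:
            best_power, best_f = power, f
    return best_f
```

On a receding inclined surface, perspective compresses the texture with distance, so the estimated frequency rises across the image; the gradient of these local estimates carries the inclination information.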